Reinforcement Learning State Estimator
Abstract
In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that estimation errors cannot be observed directly. However, by defining the errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method for constructing a nonlinear state estimator by finding an appropriate feedback input gain with the policy gradient method. We tested the proposed method on single-pendulum dynamics and show that the joint angle can be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that a state estimator can be acquired for the pendulum swing-up task while a swing-up controller is simultaneously acquired by reinforcement learning. Furthermore, we demonstrate that the dynamics of the pendulum itself can be estimated while the hidden variables are being estimated in the swing-up task. An application of the proposed method to a two-link biped model is also presented.
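The following is a minimal sketch of the idea described in the abstract: a pendulum state estimator whose feedback gains are learned with a REINFORCE-style policy-gradient update, where the penalty is built solely from the error of the observable output (here, the angular velocity), never from the hidden angle. This is not the authors' implementation; the constants, the perturbation-based update, and all variable names are illustrative assumptions.

```python
# A minimal sketch (NOT the authors' implementation): learn the feedback
# gains of a pendulum state estimator with a REINFORCE-style update.
# Only the angular velocity is observed; the joint angle stays hidden,
# and the penalty is computed from the observable error alone.
import numpy as np

rng = np.random.default_rng(0)

g_over_l, mu, dt = 9.8, 0.1, 0.01   # assumed pendulum constants

def pendulum_step(theta, omega):
    """One Euler step of theta_dot = omega, omega_dot = -(g/l) sin(theta) - mu*omega."""
    omega_next = omega + dt * (-g_over_l * np.sin(theta) - mu * omega)
    theta_next = theta + dt * omega
    return theta_next, omega_next

K = np.zeros(2)      # estimator feedback gains (the learned "policy")
sigma = 0.5          # Gaussian exploration noise on the gains
alpha = 1e-3         # learning rate

for episode in range(300):
    theta, omega = rng.uniform(-1.0, 1.0, size=2)   # true hidden state
    th_hat, om_hat = 0.0, 0.0                        # estimator's state
    K_try = K + sigma * rng.standard_normal(2)       # perturbed gains
    penalty = 0.0
    for _ in range(500):
        e = omega - om_hat            # observable error (omega is measured)
        penalty += e * e * dt         # delayed penalty from observables only
        # Estimator: a copy of the model plus feedback through the gains.
        th_hat += dt * (om_hat + K_try[0] * e)
        om_hat += dt * (-g_over_l * np.sin(th_hat) - mu * om_hat + K_try[1] * e)
        theta, omega = pendulum_step(theta, omega)
    # REINFORCE-style gradient step: favor perturbations with low penalty.
    K -= alpha * penalty * (K_try - K) / sigma**2

print("learned gains:", K)
```

Note that the hidden angle is never compared with its estimate during learning; only the observable angular-velocity error enters the penalty, which is what makes the reinforcement-learning formulation applicable to the estimation problem.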
Related Papers
Inverse reinforcement learning in continuous time and space
This paper develops a data-driven inverse reinforcement learning technique for a class of linear systems to estimate the cost function of an agent online, using input-output measurements. A simultaneous state and parameter estimator is utilized to facilitate output-feedback inverse reinforcement learning, and cost function estimation is achieved up to multiplication by a constant.
Importance sampling for reinforcement learning with multiple objectives
This thesis considers three complications that arise from applying reinforcement learning to a real-world application. In the process of using reinforcement learning to build an adaptive electronic market-maker, we find the sparsity of data, the partial observability of the domain, and the multiple objectives of the agent to cause serious problems for existing reinforcement learning algorithms....
Feed-Forward Learning: Fast Reinforcement Learning of Controllers
Reinforcement Learning (RL) approaches are, very often, rendered useless by the statistics of the required sampling process. This paper shows how very fast RL is essentially made possible by abandoning the state feedback during training episodes. The resulting new method, feed-forward learning (FF learning), employs a return estimator for pairs of a state and a feed-forward policy’s parameter v...
The Mirage of Action-Dependent Baselines in Reinforcement Learning
Policy gradient methods are a widely used class of model-free reinforcement learning algorithms in which a state-dependent baseline is used to reduce gradient estimator variance (a minimal illustration of such a baseline appears after this list). Several recent papers extend the baseline to depend on both the state and the action and suggest that this significantly reduces variance and improves sample efficiency without introducing bias into the gradient estimates. To...
Two Steps Reinforcement Learning in Continuous Reinforcement Learning Tasks
Two steps reinforcement learning is a technique that combines an iterative refinement of a Q function estimator, which can be used to obtain a state space discretization, with classical reinforcement learning algorithms like Q-learning or Sarsa. However, the method requires a discrete reward function that permits learning an approximation of the Q function using classification algorithms. However...
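As a side note on the baseline entry above, the sketch below (illustrative only, not taken from that paper) shows why a state-dependent baseline reduces policy-gradient variance without adding bias: the baseline term has zero expectation under the policy's score function. The toy task, step sizes, and baseline parameterization are all assumptions made for the example.

```python
# Illustrative-only sketch of a state-dependent baseline in REINFORCE.
# Subtracting b(s) from the return leaves the gradient unbiased, since
# E[grad log pi(a|s) * b(s)] = 0, but it can shrink the variance.
import numpy as np

rng = np.random.default_rng(1)
theta, w = 0.0, 0.0                  # policy weight and baseline weight
alpha, beta, sigma = 0.05, 0.1, 0.3  # assumed step sizes and policy noise

for _ in range(5000):
    s = rng.uniform(-1.0, 1.0)                        # state
    a = theta * s + sigma * rng.standard_normal()     # Gaussian policy
    r = -(a - s) ** 2                                 # reward peaks at a == s
    b = w * s * s                                     # baseline ~ V(s)
    score = (a - theta * s) / sigma**2 * s            # grad of log pi wrt theta
    theta += alpha * (r - b) * score                  # variance-reduced step
    w += beta * (r - b) * s * s                       # regress baseline on return

print("theta (optimum is 1.0):", theta)
```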
Journal: Neural Computation
Volume: 19, Issue: 3
Pages: -
Published: 2007